# Sparse Activated Mixture of Experts

## MoE-LLaVA-StableLM-1.6B-4e

License: Apache-2.0

MoE-LLaVA is a large vision-language model built on a mixture-of-experts architecture. By sparsely activating only a subset of expert parameters for each input, it achieves efficient multimodal learning without paying the compute cost of a fully dense model.

Tags: Text-to-Image, Transformers
Publisher: LanguageBind · Downloads: 125 · Likes: 8
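
The "4e" suffix denotes four experts per MoE layer. As a minimal sketch of how sparse activation works in such a layer, the PyTorch module below routes each token to its top-k experts via a learned gate, so only a fraction of the layer's parameters run per token. The class name, dimensions, and expert configuration are illustrative assumptions, not MoE-LLaVA's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    """Illustrative sparsely activated MoE layer (not MoE-LLaVA's code).

    A linear gate scores every expert per token; only the top-k experts
    are run, and their outputs are combined with normalized gate weights.
    """

    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router over experts
        self.experts = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
                )
                for _ in range(num_experts)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim)
        scores = self.gate(x)                            # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)             # normalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            chosen = (idx == e)                          # (tokens, top_k) bool mask
            token_mask = chosen.any(dim=-1)              # tokens routed to expert e
            if token_mask.any():
                # Gate weight for expert e (zero where not selected).
                w = (weights * chosen).sum(dim=-1, keepdim=True)
                out[token_mask] += w[token_mask] * expert(x[token_mask])
        return out


# Usage: only 2 of the 4 expert MLPs run for each token.
moe = SparseMoE(dim=64, num_experts=4, top_k=2)
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64])
```

Routing each token to two of four experts means roughly half the expert parameters are active per token; the same idea scales to many more experts, which is how sparse MoE models grow total capacity while keeping per-token compute nearly constant.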